25 research outputs found

    Computing threshold functions using dendrites

    Neurons, modeled as linear threshold units (LTUs), can in theory compute all threshold functions. In practice, however, some of these functions require synaptic weights of arbitrarily large precision. We show here that dendrites can alleviate this requirement. We introduce the non-linear threshold unit (nLTU), which integrates synaptic input sub-linearly within distinct subunits to take into account local saturation in dendrites. We systematically search the parameter space of the nLTU and the LTU to compare them. Firstly, this shows that the nLTU can compute all threshold functions with smaller-precision weights than the LTU. Secondly, we show that an nLTU can compute significantly more functions than an LTU when an input can only make a single synapse. This work paves the way for a new generation of networks made of nLTUs with binary synapses.
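    As a concrete reading of the definition above, here is a minimal sketch of the two units, assuming a hard saturation cap per subunit; the function names and all parameter values are illustrative choices, not taken from the paper.

```python
import numpy as np

def ltu(x, w, theta):
    """Linear threshold unit: fires iff the weighted input sum crosses theta."""
    return int(np.dot(w, x) >= theta)

def nltu(x, subunit_weights, cap, theta):
    """Non-linear threshold unit: each dendritic subunit sums its synaptic
    input linearly but saturates at `cap` (sub-linear integration); the soma
    then thresholds the sum of the subunit outputs."""
    subunit_out = [min(np.dot(w, x), cap) for w in subunit_weights]
    return int(sum(subunit_out) >= theta)

# Example with binary inputs and two subunits seeing two synapses each.
x = np.array([1, 0, 1, 1])
w_sub = [np.array([1.0, 1.0, 0.0, 0.0]),   # subunit 1: synapses 1-2
         np.array([0.0, 0.0, 1.0, 1.0])]   # subunit 2: synapses 3-4
print(ltu(x, np.ones(4), theta=2.0))       # 1 (linear sum = 3)
print(nltu(x, w_sub, cap=1.0, theta=2.0))  # 1 (subunit outputs 1.0 + 1.0)
```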

    Hippocampal replays under the scrutiny of reinforcement learning models

    Multiple in vivo studies have shown that place cells in the hippocampus replay previously experienced trajectories. These replays are commonly considered to mainly reflect memory consolidation processes. Some data, however, have highlighted a functional link between replays and reinforcement learning (RL). This theory, extensively used in machine learning, has introduced efficient algorithms and can explain various behavioral and physiological measures from different brain regions. RL algorithms could constitute a mechanistic description of replays and explain how replays reduce the number of iterations required to explore the environment during learning. We review here the main findings concerning the different hippocampal replay types and the possible associated RL models (model-based, model-free, or hybrid). We conclude by tying these frameworks together and illustrate the link between data and RL through a series of model simulations. This review, at the frontier between informatics and biology, paves the way for future work on replays.
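    To make the replay-RL link tangible, the sketch below implements a Dyna-Q-style agent on a toy corridor task: Q-values are updated online during behaviour and again offline from replayed stored transitions, which cuts the number of environment interactions needed. The task, parameters, and the choice of Dyna-Q are illustrative assumptions, not the specific models reviewed.

```python
import random
from collections import defaultdict

N_STATES, GOAL, ACTIONS = 8, 7, (-1, +1)       # corridor 0..7, reward at 7
ALPHA, GAMMA, EPS, N_REPLAY = 0.5, 0.95, 0.1, 30

Q = defaultdict(float)                         # Q[(state, action)]
memory = []                                    # stored (s, a, r, s') tuples

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + a))
    return s2, float(s2 == GOAL)               # unit reward at the goal

def greedy(s):
    qs = [Q[(s, b)] for b in ACTIONS]
    best = max(qs)                             # random tie-breaking
    return random.choice([b for b, q in zip(ACTIONS, qs) if q == best])

def td_update(s, a, r, s2):
    target = r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])

for trial in range(20):
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        s2, r = step(s, a)
        memory.append((s, a, r, s2))
        td_update(s, a, r, s2)                 # online, model-free update
        s = s2
    for t in random.sample(memory, min(N_REPLAY, len(memory))):
        td_update(*t)                          # offline replay between trials

print([round(Q[(s, +1)], 2) for s in range(N_STATES)])  # value gradient to goal
```

    Without the offline replay loop, the same value gradient would require many more trials to form; replay reuses each experienced transition several times, which is the computational benefit the review attributes to hippocampal replays.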

    Modelling human choices: MADeM and decision‑making

    Research supported by FAPESP 2015/50122-0 and DFG-GRTK 1740/2. RP and AR are also part of the Research, Innovation and Dissemination Center for Neuromathematics FAPESP grant (2013/07699-0). RP is supported by a FAPESP scholarship (2013/25667-8). ACR is partially supported by a CNPq fellowship (grant 306251/2014-0).

    Spiking and saturating dendrites differentially expand single neuron computation capacity.

    The integration of excitatory inputs in dendrites is non-linear: multiple excitatory inputs can produce a local depolarization departing from the arithmetic sum of each input's response taken separately. If this depolarization is bigger than the arithmetic sum, the dendrite is spiking; if it is smaller, the dendrite is saturating. Decomposing a dendritic tree into independent dendritic spiking units greatly extends its computational capacity, as the neuron then maps onto a two-layer neural network, enabling it to compute linearly non-separable Boolean functions (lnBFs). How can these lnBFs be implemented by dendritic architectures in practice? And can saturating dendrites equally expand computational capacity? To address these questions we use a binary neuron model and Boolean algebra. First, we confirm that spiking dendrites enable a neuron to compute lnBFs using an architecture based on the disjunctive normal form (DNF). Second, we prove that saturating dendrites, like spiking dendrites, enable a neuron to compute lnBFs using an architecture based on the conjunctive normal form (CNF). Unlike in a DNF-based architecture, in a CNF-based architecture the tunings of the dendritic units do not determine the tuning of the neuron, consistent with experimental observations. Third, we show that a DNF-based architecture cannot be built with saturating dendrites. Consequently, an important family of lnBFs implemented with a CNF architecture can require an exponential number of saturating dendritic units, whereas the same family implemented with either a DNF or a CNF architecture always requires only a linear number of spiking dendritic units. This minimization could explain why a neuron spends energetic resources to make its dendrites spike.
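    The two architectures can be sketched with a binary neuron model, using (x1 AND x2) OR (x3 AND x4) as an example lnBF; the thresholds and the saturation rule below are illustrative assumptions consistent with the abstract, not the paper's formal construction.

```python
import itertools

# Target linearly non-separable Boolean function (lnBF):
# f(x) = (x1 AND x2) OR (x3 AND x4)
def f(x1, x2, x3, x4):
    return int((x1 and x2) or (x3 and x4))

# DNF architecture with *spiking* subunits: each dendrite implements one
# AND-term (it "spikes", i.e. outputs 1, only when its whole term is active);
# the soma fires if any subunit spikes (OR).
def dnf_spiking(x1, x2, x3, x4):
    d1 = int(x1 + x2 >= 2)          # spiking subunit for term x1.x2
    d2 = int(x3 + x4 >= 2)          # spiking subunit for term x3.x4
    return int(d1 + d2 >= 1)        # somatic OR

# CNF architecture with *saturating* subunits: each dendrite implements one
# OR-clause via saturation (its output caps at 1 however many inputs arrive);
# the soma fires only when every clause is satisfied (AND).
CLAUSES = [(0, 2), (0, 3), (1, 2), (1, 3)]   # (x1|x3)(x1|x4)(x2|x3)(x2|x4)

def cnf_saturating(x1, x2, x3, x4):
    x = (x1, x2, x3, x4)
    d = [min(x[i] + x[j], 1) for i, j in CLAUSES]   # saturating subunits
    return int(sum(d) >= len(CLAUSES))              # somatic AND

# Both dendritic architectures reproduce the lnBF on every input.
for x in itertools.product((0, 1), repeat=4):
    assert dnf_spiking(*x) == cnf_saturating(*x) == f(*x)
```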

    [Re] Non-Additive Coupling Enables Propagation of Synchronous Spiking Activity in Purely Random Networks

    A reference implementation of: Non-additive coupling enables propagation of synchronous spiking activity in purely random networks, R. M. Memmesheimer and M. Timme, PLoS Computational Biology, 8(4): e1002384, 2012. Dendritic non-linearities increase neurons' computation capacity, turning them into complex computing units [4]. However, network studies are usually based on point-neuron models that do not incorporate dendrites and their non-linearities. In contrast, the study replicated here [2] uses a simple point-neuron model that contains an effective description of dendrites through a non-linear summation of its excitatory synaptic input. Due to the simplicity of the model, both a large-scale parameter exploration of a medium-sized network and an analytical investigation of its properties are feasible. The original study was based on simulation and analysis code in C and Mathematica, but this code is not publicly available. Here, we replicate the study using the neural simulator Brian 2 [1, 6], a simulator based on the Python language that has become a common choice in computational neuroscience [3]. It offers a good trade-off between flexibility and performance and is therefore a suitable choice for this study of a non-standard neuron model.
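    For readers unfamiliar with the model class, one common form of such a non-additive summation rule is sketched below: near-synchronous inputs sum linearly unless their sum crosses a dendritic threshold, in which case a stereotyped, supra-additive dendritic-spike depolarization is generated. The rule's exact shape and the parameter values are illustrative, not the equations of the replicated paper.

```python
# Non-additive coupling, schematically: EPSPs arriving within one dendritic
# integration window are summed linearly; crossing the dendritic threshold
# triggers a fixed, larger (supra-additive) somatic depolarization.
THETA_B = 3.0      # dendritic threshold (arbitrary units, illustrative)
KAPPA   = 8.0      # stereotyped dendritic-spike depolarization (illustrative)

def dendritic_sum(epsps):
    """Effective somatic depolarization for a set of near-synchronous EPSPs."""
    linear = sum(epsps)
    return KAPPA if linear >= THETA_B else linear

print(dendritic_sum([1.0, 1.0]))        # 2.0 -> additive regime
print(dendritic_sum([1.0, 1.0, 1.0]))   # 8.0 -> dendritic spike, supra-additive
```

    It is this discontinuous boost for synchronous input that lets synchronous spiking activity propagate through an otherwise purely random network.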

    On exploiting the synaptic interaction properties to obtain frequency-specific neurons

    Energy consumption remains the main limiting factor in many IoT applications; in particular, micro-controllers consume far too much power. To overcome this problem, new circuit designs have been proposed, and the use of spiking neurons and analog computing has emerged because it allows a very significant reduction in consumption. Working in the analog domain, however, makes it difficult to handle the sequential processing of incoming signals that many use cases require. In this paper, we use a bio-inspired phenomenon called Interacting Synapses to produce a time filter, without using non-biological techniques such as synaptic delays. We propose a model of neuron and synapses that fires for a specific range of delays between two incoming spikes, but does not react when this Inter-Spike Timing falls outside that range. We study the parameters of the model to understand how to choose them and how to adapt the Inter-Spike Timing. The originality of the paper is to propose a new way, in the analog domain, of dealing with temporal sequences.
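    A minimal sketch of such a band-pass timing filter is given below, assuming the first spike leaves an alpha-function "priming" trace that gates the response to the second spike; the trace shape, time constant, and threshold are illustrative stand-ins for the interacting-synapse dynamics in the paper.

```python
import math

TAU   = 10.0   # ms, time-to-peak of the priming trace (illustrative)
THETA = 0.6    # firing threshold on the gated response (illustrative)

def priming_trace(dt):
    """Alpha function: rises, peaks at dt = TAU, then decays (peak = 1)."""
    return (dt / TAU) * math.exp(1.0 - dt / TAU) if dt > 0 else 0.0

def fires(inter_spike_timing):
    """Output spike iff the second input lands inside the preferred window."""
    return priming_trace(inter_spike_timing) >= THETA

for dt in (2, 5, 10, 20, 40):   # ms between the two incoming spikes
    print(dt, fires(dt))         # fires only for intermediate delays
```

    Because the trace both rises and decays, the unit rejects delays that are too short as well as too long, which is the frequency-specific behaviour the paper targets.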

    A robust model of sensory tuning using dendritic non-linearities


    Demonstration that sublinear dendrites enable linearly non-separable computations

    Theory predicts that non-linear summation of synaptic potentials within dendrites allows neurons to perform linearly non-separable computations (LNSCs). Using Boolean analysis approaches, we predicted that both supralinear and sublinear synaptic summation could allow single neurons to implement a type of LNSC, the feature binding problem (FBP), which, unlike the XOR, does not require inhibition. Notably, sublinear dendritic operations enable LNSCs when scattered synaptic activation generates increased somatic spike output. However, experimental demonstrations of such scatter-sensitive neuronal computations have not yet been reported. Using glutamate uncaging onto cerebellar molecular layer interneurons, we show that scattered synaptic-like activation of dendrites evoked larger compound EPSPs than clustered activation, generating a higher output spiking probability. Moreover, we demonstrate that single interneurons can indeed implement the FBP. We use a biophysical model to predict under what conditions a neuron can implement the FBP and what leads to failures. Experimental results agree with the model-determined conditions, validating our protocol as a solid benchmark for a neuron implementing linearly non-separable computations. Since sublinear synaptic summation is a property of passive dendrites, we expect that many different neuron types can implement LNSCs.
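    A toy version of the scatter preference described above can be written with two saturating subunits: the same number of active inputs drives the soma harder when spread across dendrites than when clustered on one. The saturation cap and somatic threshold below are illustrative, not fitted to the uncaging data.

```python
# Sublinear (saturating) subunits make a neuron scatter-preferring, which is
# itself a linearly non-separable computation over the four inputs.
CAP, THETA = 1.2, 2.0            # subunit saturation level, somatic threshold
SUBUNITS = [(0, 1), (2, 3)]      # synapses 0-1 on dendrite A, 2-3 on dendrite B

def soma(x):
    """Somatic drive = sum of saturating per-dendrite input sums."""
    return sum(min(sum(x[i] for i in d), CAP) for d in SUBUNITS)

def fires(x):
    return int(soma(x) >= THETA)

scattered = (1, 0, 1, 0)   # one input on each dendrite
clustered = (1, 1, 0, 0)   # both inputs on dendrite A
print(soma(scattered), fires(scattered))   # 2.0 -> fires
print(soma(clustered), fires(clustered))   # 1.2 -> silent
```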